4 research outputs found

    Optimizing Alzheimer's disease prediction using the nomadic people algorithm

    The problem with using microarray technology to detect diseases is that not every gene it measures is analytically necessary, and the presence of non-essential gene data adds computational load to the detection method. The purpose of this study is therefore to reduce the size of the high-dimensional data by determining the genes most critical to Alzheimer's disease progression. The study also aims to predict Alzheimer's disease in patients using this subset of genes. This paper uses feature selection techniques, namely information gain (IG) and a novel metaheuristic optimization technique based on a swarm algorithm derived from the behavior of nomadic people (NPO). The proposed method models the movements of these people and their search for new food sources; it is essentially a multi-swarm approach in which several clans each seek the best foraging opportunities. After the informative genes are selected, prediction is carried out with a support vector machine (SVM), a model frequently used in a variety of prediction tasks. Prediction accuracy was used to evaluate the proposed system's performance. The results indicate that the NPO algorithm with the SVM model achieves high accuracy on the gene subset produced by the IG and NPO methods.
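
    As a rough illustration of this two-stage pipeline, the sketch below filters genes by information gain (via scikit-learn's mutual information, a close proxy) and then runs a simplified multi-swarm search scored by SVM cross-validated accuracy. The clan update rule, the toy dataset, and all parameter values are illustrative assumptions, not the paper's actual NPO dynamics.

        # Sketch of the IG -> multi-swarm search -> SVM pipeline described above.
        # The clan update rule is a simplified stand-in for NPO; the dataset and
        # all parameter values are illustrative assumptions.
        import numpy as np
        from sklearn.feature_selection import mutual_info_classif  # information gain proxy
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Toy stand-in for a microarray matrix: 60 samples x 500 genes, binary labels.
        X = rng.normal(size=(60, 500))
        y = rng.integers(0, 2, size=60)

        # Stage 1: information-gain-style filter keeps the top-k ranked genes.
        k = 50
        ig = mutual_info_classif(X, y, random_state=0)
        X_top = X[:, np.argsort(ig)[-k:]]

        def fitness(mask):
            """SVM cross-validated accuracy on the genes selected by a binary mask."""
            if mask.sum() == 0:
                return 0.0
            return cross_val_score(SVC(kernel="linear"),
                                   X_top[:, mask.astype(bool)], y, cv=3).mean()

        # Stage 2: a crude multi-swarm ("clans") binary search over gene subsets.
        n_clans, clan_size, iters = 3, 5, 10
        swarms = rng.integers(0, 2, size=(n_clans, clan_size, k))
        best_mask, best_fit = None, -1.0
        for _ in range(iters):
            for clan in swarms:
                fits = np.array([fitness(m) for m in clan])
                leader = clan[fits.argmax()]            # clan leader = best forager
                if fits.max() > best_fit:
                    best_fit, best_mask = fits.max(), leader.copy()
                # Followers drift toward the leader by copying bits with probability 0.3.
                copy = rng.random(clan.shape) < 0.3
                clan[:] = np.where(copy, leader, clan)
                # Random bit flips keep the clans exploring new "pastures".
                flips = rng.random(clan.shape) < 0.05
                clan[:] = np.where(flips, 1 - clan, clan)

        print(f"best CV accuracy {best_fit:.3f} with {int(best_mask.sum())} genes")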

    Early Alzheimer's Disease Detection Using Different Techniques Based on Microarray Data: A Review

    Alzheimer's Disease (AD) is a degenerative brain disease that causes memory loss through the death of brain cells, and it becomes more common as people age. Memory loss worsens over time, until the person loses the ability to respond appropriately to their surroundings. Microarray technology has emerged as a new trend in genetic research, and many researchers use it to examine changes in gene expression in particular organisms. Microarray experiments have various uses in medicine, including the prediction and detection of disease. However, large amounts of unprocessed raw gene expression profiles often create computational and analytical difficulties, including selecting dataset features and classifying them into an appropriate group or class. The high dimensionality, small sample sizes, and noise of gene expression data make it difficult to attain good Alzheimer's classification accuracy using the entire collection of genes, so the classification process requires careful feature reduction. This paper therefore presents a comprehensive review of microarray studies of Alzheimer's disease, focusing on feature selection techniques.
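
    The sketch below illustrates the kind of filter-based feature reduction such reviews survey: a univariate test ranks each gene and keeps only the most discriminative ones, shrinking a matrix where genes far outnumber samples. The dataset shape, the choice of an ANOVA F-test, and k are illustrative assumptions, not drawn from the review.

        # Minimal filter-based feature reduction on a toy expression matrix.
        # Shapes and the scoring function are illustrative assumptions.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif

        rng = np.random.default_rng(1)
        X = rng.normal(size=(40, 2000))      # 40 samples, 2000 genes: p >> n
        y = rng.integers(0, 2, size=40)      # toy AD vs. control labels

        selector = SelectKBest(score_func=f_classif, k=100)
        X_reduced = selector.fit_transform(X, y)  # ANOVA F-test ranks each gene

        print(X.shape, "->", X_reduced.shape)     # (40, 2000) -> (40, 100)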

    Using Machine Learning via Deep Learning Algorithms to Diagnose the Lung Disease Based on Chest Imaging: A Survey

    Chest imaging diagnostics is crucial in medicine because of serious lung diseases such as cancer and nodules, and particularly during the current COVID-19 pandemic. Machine learning approaches yield strong results on the diagnosis task, and deep learning methods have recently been applied and recommended by many studies in this domain. This survey critically examines the newest lung disease detection procedures that use deep learning algorithms on X-ray and CT scan datasets. The most recent studies in this area (2015-2021) are reviewed and summarized to give an overview of the most appropriate methods to use or develop in future work, the limitations to consider, and the degree to which these techniques help physicians identify disease more accurately. The lack of varied standard datasets, the need for huge training sets, the high dimensionality of the data, and the independence of features are the main limitations identified in the literature. Although researchers use many different deep learning architectures, Convolutional Neural Networks (CNNs) remain the state-of-the-art technique for image datasets.
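
    The sketch below shows a minimal CNN of the kind the survey identifies as state of the art for chest images. The input size, layer widths, and binary normal/abnormal target are illustrative assumptions; the studies reviewed train such models on labeled X-ray or CT datasets, which are not included here.

        # Minimal CNN classifier for grayscale chest images (illustrative only).
        import tensorflow as tf
        from tensorflow.keras import layers, models

        model = models.Sequential([
            layers.Input(shape=(224, 224, 1)),        # grayscale chest X-ray
            layers.Conv2D(32, 3, activation="relu"),  # learn local texture filters
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),          # cheaper than a large Dense layer
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),    # normal vs. disease
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.summary()
        # Training would call model.fit(images, labels, ...) on a labeled dataset.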

    Fast discrimination of fake video manipulation

    Deepfakes have become possible using artificial intelligence techniques: one person's face is replaced with another's (primarily a public figure's), making the latter appear to do or say things they never did. Contributing a solution for video credibility has therefore become a critical goal, which we address in this paper. Our work exploits the visible artifacts (blur inconsistencies) generated by the manipulation process, analyzing focus quality and its ability to detect them. The focus measure operators in this paper come from the image Laplacian and image gradient groups, which are very fast to compute and do not need a large training dataset. The results showed that (i) the value of the Laplacian-group operators may be lower or higher in a fake video than in the real video, depending on the quality of the fake, so they cannot be used for deepfake detection; and (ii) the gradient-based measure (GRA7) decreases in the fake video in all cases, whether the fake is of high or low quality, and can therefore help detect deepfakes.
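
    The sketch below computes both families of focus measure per frame with OpenCV: the variance of the Laplacian for the Laplacian group, and a Tenengrad-style squared-gradient magnitude standing in for the gradient group. The exact definition of GRA7 is not given here, so the gradient measure is an assumption, not the paper's operator.

        # Per-frame focus measures of the two families discussed above.
        # The gradient measure is a Tenengrad-style stand-in, not GRA7 itself.
        import cv2
        import numpy as np

        def laplacian_focus(gray):
            """Variance of the Laplacian: a common sharpness/blur score."""
            return cv2.Laplacian(gray, cv2.CV_64F).var()

        def gradient_focus(gray):
            """Mean squared gradient magnitude (Tenengrad-style measure)."""
            gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
            gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
            return np.mean(gx**2 + gy**2)

        def score_video(path):
            """Return per-frame (laplacian, gradient) focus scores for a video."""
            cap = cv2.VideoCapture(path)
            scores = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                scores.append((laplacian_focus(gray), gradient_focus(gray)))
            cap.release()
            return scores

        # Comparing the gradient scores of a suspect clip against reference
        # footage would flag the blur inconsistencies manipulation leaves behind.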